High-performance trajectory tracking control of quadrotor vehicles is an important challenge in aerial robotics. Symmetry is a fundamental property of physical systems and offers the potential to provide tools for the design of high-performance control algorithms. We propose a design methodology that takes any given symmetry, linearises the associated error in a set of symmetry-adapted coordinates, and applies LQR design to obtain a high-performance controller; an approach we term equivariant regulator design. We show that the quadrotor vehicle admits several different symmetries: the direct product symmetry, the extended pose symmetry, and the pose and velocity symmetry, and we show that each symmetry can be used to define a global error. We compare the linearised systems in simulation and find that the extended pose and the pose and velocity symmetries outperform the direct product symmetry in the presence of large disturbances. This suggests that the choice of symmetry matters, and that group-affine symmetries yield improved linearisation error.
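The core of the regulator design above is solving an LQR problem on the linearised error dynamics. As a minimal sketch, the snippet below computes a discrete-time LQR gain by iterating the Riccati recursion on a toy double-integrator system; the system matrices, step size, and cost weights are illustrative stand-ins, not the paper's quadrotor model.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double-integrator error dynamics: a stand-in for the linearised
# quadrotor error system (dt, Q, R are illustrative choices).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# The closed-loop matrix A - B K should have all eigenvalues inside
# the unit circle (Schur stability).
eigs = np.linalg.eigvals(A - B @ K)
```

The same gain computation applies to whichever symmetry-induced error coordinates are chosen; only the matrices A and B change.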
Visual Inertial Odometry (VIO) is the problem of estimating a robot's trajectory by combining information from an inertial measurement unit (IMU) and a camera, and is of great interest to the robotics community. This paper develops a novel Lie group symmetry for the VIO problem and applies the recently proposed equivariant filter. The symmetry is shown to be compatible with the invariance of the VIO reference frame, lead to exact linearisation of bias-free IMU dynamics, and provide equivariance of the visual measurement function. As a result, the equivariant filter (EqF) based on this Lie group is a consistent estimator for VIO with lower linearisation error in the propagation of state dynamics and a higher order equivariant output approximation than standard formulations. Experimental results on the popular EuRoC and UZH FPV datasets demonstrate that the proposed system outperforms other state-of-the-art VIO algorithms in terms of both speed and accuracy.
Hybridisation and ensemble learning are popular model-fusion techniques for improving the predictive power of forecasting methods. With limited research combining these two promising approaches, this paper focuses on the utility of the Exponential Smoothing Recurrent Neural Network (ES-RNN) in the base-model pools of different ensembles. We compare against some state-of-the-art ensembling techniques and arithmetic model averaging as benchmarks. We experiment on the M4 forecasting dataset of 100,000 time series, and the results show that Feature-Based Forecast Model Averaging (FFORMA) is, on average, the best technique for late data fusion with ES-RNN. However, considering the daily data subset of M4, stacking was the only ensemble that successfully handled the case where all base models perform similarly. Our experimental results show that we achieve state-of-the-art forecasting results compared to N-BEATS as a benchmark. We conclude that model averaging is more robust than model selection and stacking strategies. Further, the results suggest that gradient boosting is superior for implementing ensemble learning strategies.
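The contrast between arithmetic averaging and a trained combiner can be sketched in a few lines. Below, two synthetic "base model" forecast series are fused either by a plain average or by least-squares stacking weights fit on a hold-in split; the series, noise levels, and split are toy assumptions, not the M4 setup.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, 6.0, 200))        # target series
f1 = y + rng.normal(0.0, 0.10, y.size)        # base model 1 (accurate)
f2 = y + rng.normal(0.0, 0.30, y.size)        # base model 2 (noisy)
X = np.column_stack([f1, f2])

# Benchmark: arithmetic model averaging.  Stacking: a least-squares
# meta-learner fit on the first half, evaluated on the second half.
train, test = slice(0, 100), slice(100, 200)
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
avg_err = np.mean((X[test].mean(axis=1) - y[test]) ** 2)
stack_err = np.mean((X[test] @ w - y[test]) ** 2)
```

Because the meta-learner can down-weight the noisier base model, the stacked forecast error is lower than the plain average here; when the base models perform similarly, the two fusion rules converge, matching the observation above about the M4 daily subset.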
The generalisation performance of a convolutional neural network (CNN) is influenced by the number, quality, and variety of its training images. Training images must be annotated, which is time-consuming and expensive. The goal of our work is to reduce the number of annotated images needed to train a CNN while maintaining its performance. We hypothesise that the performance of a CNN can be improved faster by ensuring that the set of training images contains a large fraction of hard-to-classify images. The objective of our study is to test this hypothesis with an active learning method that can automatically select the hard-to-classify images. We developed an active learning method for Mask Region-based CNN (Mask R-CNN) and named this method MaskAL. MaskAL involves the iterative training of Mask R-CNN, after which the trained model is used to select a set of unlabelled images about which the model is uncertain. The selected images are then annotated and used to retrain Mask R-CNN, and this is repeated for a number of sampling iterations. In our study, Mask R-CNN was trained on 2,500 broccoli images that were selected through 12 sampling iterations by either MaskAL or random sampling from a training set of 14,000 broccoli images. For all sampling iterations, MaskAL performed significantly better than random sampling. Furthermore, after sampling 900 images, MaskAL had the same performance as random sampling had after 2,300 images. Compared to a Mask R-CNN model trained on the entire training set (14,000 images), MaskAL achieved 93.9% of its performance with 17.9% of its training data, whereas random sampling achieved 81.9% of its performance with 16.4% of its training data. We conclude that by using MaskAL, the annotation effort can be reduced for training Mask R-CNN on a dataset of broccoli images. Our software is available at https://github.com/pieterblok/maskal.
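The selection step of an active learning loop like the one described can be sketched with a simple uncertainty-sampling rule: pick the unlabelled images with the lowest top-class confidence. The per-image probabilities below are toy numbers standing in for a detector's softmax outputs, and `uncertainty_sample` is a hypothetical helper, not the MaskAL implementation.

```python
import numpy as np

def uncertainty_sample(probs, n):
    """Pick the n unlabelled images the model is least certain about
    (lowest maximum class probability), i.e. uncertainty-based sampling."""
    confidence = probs.max(axis=1)        # top-class probability per image
    return np.argsort(confidence)[:n]     # least-confident images first

# Toy pool of 4 images with 2-class softmax outputs.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.80, 0.20],
                  [0.51, 0.49]])
picked = uncertainty_sample(probs, 2)     # selects the two near-50/50 images
```

In the full loop, the selected images would be annotated, added to the training set, and the model retrained, repeating for each sampling iteration.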
The electrocardiogram (ECG) is an effective and non-invasive diagnostic tool that measures the electrical activity of the heart. Interpreting ECG signals to detect various abnormalities is a challenging task that requires expertise. Recently, the use of deep neural networks for ECG classification to assist medical practitioners has become popular, but their black-box nature hampers clinical implementation. Several saliency-based interpretability techniques have been proposed, but they only indicate the location of important features rather than the actual features themselves. We propose a novel interpretability technique called QLST, a query-based latent space traversal technique that can provide explanations for any ECG classification model. With QLST, we train a neural network that learns to traverse the latent space of a variational autoencoder trained on a university hospital dataset of over 800,000 ECGs annotated for 28 diseases. We demonstrate through experiments that we can explain different black-box classifiers by generating explanations via these traversals.
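A latent space traversal, in its simplest form, produces a path of latent codes between a query and a target; decoding each point would visualise how the ECG morphs as the classifier's decision changes. The snippet below is a minimal linear-interpolation sketch with toy codes and no decoder, not the learned traversal network described above.

```python
import numpy as np

def traverse(z_query, z_target, steps=5):
    """Linearly interpolate between two latent codes.  Decoding each
    intermediate code (decoder omitted here) would show how a query ECG
    morphs towards the target class."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_query + alphas * z_target

z_a = np.zeros(8)           # toy latent code of a query ECG
z_b = np.ones(8)            # toy latent code near a disease class
path = traverse(z_a, z_b)   # 5 codes from z_a to z_b, endpoints included
```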
We investigate ensembling techniques in forecasting and examine their potential for use with nonseasonal time series similar to those of the early days of the COVID-19 pandemic. Developing improved forecasting methods is essential, as they provide data-driven decisions to organisations and decision-makers during critical phases. We propose using late data fusion, with a stacked ensemble of two forecasting models and two meta-features that demonstrate their predictive power during a preliminary forecasting stage. The final ensemble includes Prophet and a long short-term memory (LSTM) neural network as base models. The base models are combined by a multilayer perceptron (MLP), taking into account the meta-features that show the highest correlation with each base model's forecast accuracy. We further show that including the meta-features generally improves the ensemble's forecast accuracy over the two forecast horizons of seven and fourteen days. This research reinforces previous work and demonstrates the value of combining traditional statistical models with deep learning models to produce more accurate forecasting models for time series from different domains and with different seasonality.
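One simple way meta-features can steer a combiner is by weighting each base model by its recent accuracy. The helper below is a hypothetical stand-in for the MLP combiner described above, using inverse recent error as the weighting rule and toy error numbers.

```python
import numpy as np

def metafeature_weights(recent_errors):
    """Weight each base model in inverse proportion to its recent error,
    a simple stand-in for a learned meta-feature-aware combiner."""
    inv = 1.0 / np.asarray(recent_errors, dtype=float)
    return inv / inv.sum()

# Toy recent MSEs for two base models (e.g. Prophet vs. LSTM).
w = metafeature_weights([0.2, 0.8])   # more weight on the better model
```

A learned MLP can go beyond this rule by conditioning on arbitrary meta-features, but the inverse-error weighting captures the same intuition.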
Machine learning research depends on objectively interpretable, comparable, and reproducible algorithm benchmarks. We advocate the use of curated, comprehensive suites of machine learning tasks to standardise the setup, execution, and reporting of benchmarks. We enable this through software tools that help to create and leverage these benchmark suites. These are seamlessly integrated into the OpenML platform and accessible through interfaces in Python, Java, and R. OpenML benchmark suites (a) are easy to use through standardised data formats, APIs, and client libraries; (b) come with extensive meta-information on the included datasets; and (c) allow benchmarks to be shared and reused in future studies. We then present a carefully curated and practical benchmark suite for classification: the OpenML Curated Classification benchmarking suite 2018 (OpenML-CC18). Finally, we discuss use cases and applications that particularly demonstrate the usefulness of OpenML benchmark suites and the OpenML-CC18.
Using massive datasets to train large-scale models has emerged as a dominant approach for broad generalization in natural language and vision applications. In reinforcement learning, however, a key challenge is that available data of sequential decision making is often not annotated with actions - for example, videos of game-play are much more available than sequences of frames paired with their logged game controls. We propose to circumvent this challenge by combining large but sparsely-annotated datasets from a \emph{target} environment of interest with fully-annotated datasets from various other \emph{source} environments. Our method, Action Limited PreTraining (ALPT), leverages the generalization capabilities of inverse dynamics modelling (IDM) to label missing action data in the target environment. We show that utilizing even one additional environment dataset of labelled data during IDM pretraining gives rise to substantial improvements in generating action labels for unannotated sequences. We evaluate our method on benchmark game-playing environments and show that we can significantly improve game performance and generalization capability compared to other approaches, using annotated datasets equivalent to only $12$ minutes of gameplay. Highlighting the power of IDM, we show that these benefits remain even when target and source environments share no common actions.
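The pseudo-labelling step at the heart of ALPT can be sketched compactly: an inverse dynamics model predicts the action between consecutive observations, which labels the unannotated frames. The one-dimensional state and sign-based IDM below are toy stand-ins for a learned model over game frames.

```python
import numpy as np

def pseudo_label(idm, frames):
    """Label each consecutive frame pair with the IDM's predicted action."""
    return [idm(frames[t], frames[t + 1]) for t in range(len(frames) - 1)]

# Toy IDM: the action is the sign of the change in a scalar state,
# a stand-in for a learned inverse dynamics model p(a | s_t, s_{t+1}).
idm = lambda s, s_next: int(np.sign(s_next - s))
actions = pseudo_label(idm, [0.0, 1.0, 0.5, 0.5])
```

In the full method, the IDM is pretrained on the fully annotated source environments plus the sparsely annotated target data, and the resulting pseudo-labels supervise policy learning on the target environment.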
Humans are excellent at understanding language and vision to accomplish a wide range of tasks. In contrast, creating general instruction-following embodied agents remains a difficult challenge. Prior work that uses pure language-only models lacks visual grounding, making it difficult to connect language instructions with visual observations. On the other hand, methods that use pre-trained vision-language models typically come with divided language and visual representations, requiring specialized network architectures to fuse them together. We propose a simple yet effective model for robots to solve instruction-following tasks in vision-based environments. Our \ours method consists of a multimodal transformer that encodes visual observations and language instructions, and a policy transformer that predicts actions based on encoded representations. The multimodal transformer is pre-trained on millions of image-text pairs and natural language text, thereby producing generic cross-modal representations of observations and instructions. The policy transformer keeps track of the full history of observations and actions, and predicts actions autoregressively. We show that this unified transformer model outperforms all state-of-the-art pre-trained or trained-from-scratch methods in both single-task and multi-task settings. Our model also shows better scalability and generalization ability than prior work.